Video Event Detection
Introduction
Recent years have witnessed an explosion of multimedia content on the web. For example, YouTube, one of the most popular video-sharing websites, serves over 100 million distinct videos and receives about 65,000 new uploads every day. This growing number of videos creates a pressing need for effective tools to support retrieval and browsing. However, given an event query, search engines may return thousands or more videos that are diverse and noisy. The evolution of the event cannot be grasped by simply watching these videos one by one; worse, some of them are only weakly relevant, or not relevant at all, to the query. These factors distract users from the key points of the event and force them to spend a long time exploring the returned videos to get an overview. Video summarization is a promising way to generate a condensed and succinct response to an event query.
Framework
- Copy detection and filtering
- Key shots extraction
- Video Summarization
As illustrated in Fig. 1, the framework of this project consists of three parts: copy detection and filtering, key shot extraction, and video summarization.
Copy detection is used to identify duplicate copies of the same video in the massive amount of internet multimedia data. In this stage we also filter out videos that are not related to the topic.
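The project page does not specify the copy detection method, so the following is only a minimal sketch of one common approach: each video is fingerprinted with perceptual average hashes of sampled frames, and two videos are treated as copies when most of their sampled frames have near-identical hashes. All function names, thresholds, and sampling rates here are illustrative assumptions.

```python
# Duplicate-video detection via perceptual frame hashing (illustrative sketch).
import cv2
import numpy as np

def average_hash(frame, hash_size=8):
    """Compute a simple average hash (aHash) of one frame as a 64-bit fingerprint."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (hash_size, hash_size))
    return (small > small.mean()).flatten()  # boolean vector; compare by Hamming distance

def video_fingerprint(path, step=30):
    """Sample one frame every `step` frames and hash each sample."""
    cap = cv2.VideoCapture(path)
    hashes, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            hashes.append(average_hash(frame))
        idx += 1
    cap.release()
    return hashes

def is_duplicate(fp_a, fp_b, max_hamming=10, min_match_ratio=0.8):
    """Two videos are treated as copies if most sampled frames nearly match."""
    n = min(len(fp_a), len(fp_b))
    if n == 0:
        return False
    matches = sum(
        int(np.count_nonzero(a != b) <= max_hamming)
        for a, b in zip(fp_a[:n], fp_b[:n])
    )
    return matches / n >= min_match_ratio
```

In practice the fingerprints would be indexed (e.g. hashed into buckets) rather than compared pairwise, but the pairwise check above conveys the idea of this stage.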
Algorithms such as near-duplicate keyframe (NDK) detection are used for key shot extraction. This stage aims to extract distinct key shots from the abundant videos.
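As a rough illustration of the front end of this stage (not the NDK algorithm itself, which the page does not detail), the sketch below segments a video into shots via colour-histogram differences between consecutive frames and keeps the middle frame of each shot as its keyframe; grouping near-duplicate keyframes across videos would then follow, e.g. by comparing fingerprints like those in the previous sketch. The threshold is an assumption.

```python
# Shot segmentation and keyframe selection (illustrative sketch).
import cv2

def frame_histogram(frame, bins=32):
    """Normalized 2-D hue/saturation histogram of one frame."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def extract_keyframes(path, cut_threshold=0.5):
    """Return keyframe indices, one per detected shot."""
    cap = cv2.VideoCapture(path)
    boundaries, prev_hist, idx = [0], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = frame_histogram(frame)
        if prev_hist is not None:
            # A large chi-square distance between consecutive histograms signals a cut.
            if cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CHISQR) > cut_threshold:
                boundaries.append(idx)
        prev_hist = hist
        idx += 1
    cap.release()
    boundaries.append(idx)
    # Keyframe index = middle frame of each shot [start, end).
    return [(s + e) // 2 for s, e in zip(boundaries[:-1], boundaries[1:]) if e > s]
```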
In this phase, we first use tag localization to assign tags to the key shots extracted in the previous stages. We then rank the key shots according to their occurrence frequency. With this strategy we can generate a storyboard for the topic.
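A minimal sketch of this summarization step is given below. It assumes each key shot already carries tags from tag localization and an occurrence count (how many near-duplicate appearances the shot has across the retrieved videos, as produced by the previous stage); shots are ranked by occurrence and the top ones are laid out in temporal order as a storyboard. The data fields and board size are assumptions for illustration.

```python
# Ranking key shots by occurrence and assembling a storyboard (illustrative sketch).
from dataclasses import dataclass, field

@dataclass
class KeyShot:
    video_id: str
    frame_index: int   # keyframe position inside its source video
    occurrence: int    # number of near-duplicate appearances across videos
    tags: list = field(default_factory=list)

def build_storyboard(key_shots, board_size=10):
    # Rank key shots by how often they occur across the video collection.
    ranked = sorted(key_shots, key=lambda s: s.occurrence, reverse=True)
    board = ranked[:board_size]
    # Present the selected shots in temporal order so the board reads as a story.
    return sorted(board, key=lambda s: (s.video_id, s.frame_index))

# Illustrative usage with made-up shots and tags:
shots = [
    KeyShot("v1", 120, occurrence=9, tags=["ceremony", "crowd"]),
    KeyShot("v1", 480, occurrence=2, tags=["speech"]),
    KeyShot("v2", 60, occurrence=5, tags=["crowd", "flags"]),
]
for shot in build_storyboard(shots, board_size=2):
    print(shot.video_id, shot.frame_index, shot.tags)
```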